
    Real-Time Trigger and online Data Reduction based on Machine Learning Methods for Particle Detector Technology

    Modern particle accelerator experiments generate immense volumes of data at runtime. Storing the entire amount of data produced quickly exceeds the available budget for the data readout infrastructure. This problem is usually addressed by a combination of trigger and data reduction mechanisms. Both mechanisms are placed as close to the detectors as possible in order to achieve the desired reduction of the outgoing data rates as early as possible. Methods traditionally used in such systems, however, struggle to achieve an efficient reduction in modern experiments. The reasons for this lie partly in the complex distributions of the occurring background events. During the development of the detector readout, this situation is aggravated by the properties of the accelerator and the detector under high-luminosity operation, which are unknown in advance. A robust and flexible algorithmic alternative is therefore required, and such an alternative can be provided by machine learning methods. Since such trigger and data reduction systems must operate under demanding conditions, such as a tight latency budget, a large number of data transmission links, and general real-time requirements, FPGAs are often used as the technological basis for their implementation. Within this thesis, several FPGA-based approaches were developed and implemented that address the prevailing problems for the Belle II experiment. These approaches are presented and discussed throughout this thesis
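To illustrate the kind of machine-learning trigger decision the abstract refers to (a sketch only, not the thesis's actual models; all weights, shapes, and names below are hypothetical): a small feed-forward network scores each event, and events below the score threshold are rejected. On an FPGA this would typically run with fixed-point weights and pipelined multiply-accumulate stages to meet the latency budget.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def nn_trigger(features, w1, b1, w2, b2, threshold=0.5):
    """Toy two-layer network producing a keep/reject trigger decision.

    features: per-event feature vector (e.g. hit counts, energy sums).
    Returns True if the event should be kept.
    """
    hidden = relu(features @ w1 + b1)
    # Sigmoid output: a "signal-likeness" score in (0, 1).
    score = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))
    return bool(score > threshold)
```

A flexible model like this can be retrained when accelerator or detector conditions change, whereas a fixed cut-based trigger would have to be redesigned.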

    Reconfigurable FPGA-Based Channelization Using Polyphase Filter Banks for Quantum Computing Systems

    Recently proposed quantum systems use frequency-multiplexed qubit technology for readout electronics rather than analog circuitry, to increase the cost effectiveness of the system. In order to restore individual channels for further processing, these systems require a demultiplexing or channelization approach which can process high data rates with low latency and uses few hardware resources. In this paper, a low-latency, adaptable, FPGA-based channelizer using the Polyphase Filter Bank (PFB) signal processing algorithm is presented. As only a single prototype lowpass filter needs to be designed to process all channels, PFBs can be easily adapted to different requirements and further allow for simplified filter design. By reusing the same filter for each channel, they also reduce hardware resource utilization compared to the traditional Digital Down Conversion approach. The realized system architecture is highly generic, allowing the user to select from different numbers of channels, sample bit widths and throughput specifications. For a test setup using a 28-coefficient transpose filter and 4 output channels, the proposed architecture yields a throughput of 12.8 Gb/s with a latency of 7 clock cycles
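A minimal software sketch of the PFB channelization scheme described above (an illustrative NumPy model, not the FPGA implementation; the prototype-filter design and all parameters are assumptions). The single prototype lowpass filter is split into polyphase branches, each input frame is weighted by those branches, and a DFT across the branch outputs separates the channels.

```python
import numpy as np

def pfb_channelize(x, n_channels, taps_per_channel):
    """Critically sampled polyphase filter bank channelizer (toy model)."""
    n_taps = n_channels * taps_per_channel
    # One prototype lowpass (windowed sinc, cutoff ~ 1/n_channels),
    # designed once and shared by all channels.
    proto = np.sinc(np.arange(n_taps) / n_channels - taps_per_channel / 2.0)
    proto *= np.hamming(n_taps)
    # Polyphase decomposition: branch (t, c) holds coefficient t*N + c.
    branches = proto.reshape(taps_per_channel, n_channels)
    n_frames = len(x) // n_channels - taps_per_channel + 1
    out = np.empty((n_frames, n_channels), dtype=complex)
    for m in range(n_frames):
        frame = x[m * n_channels : m * n_channels + n_taps]
        # Weighted sum along each branch, then DFT across branches.
        summed = (frame.reshape(taps_per_channel, n_channels) * branches).sum(axis=0)
        out[m] = np.fft.fft(summed)
    return out
```

A tone placed at the centre of channel k should appear (almost) exclusively in output bin k, which is a convenient sanity check for any parameter choice.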

    The H1 Forward Proton Spectrometer at HERA

    The forward proton spectrometer is part of the H1 detector at the HERA collider. Protons with energies above 500 GeV and polar angles below 1 mrad can be detected by this spectrometer. The main detector components are scintillating fiber detectors read out by position-sensitive photo-multipliers. These detectors are housed in so-called Roman Pots which allow them to be moved close to the circulating proton beam. Four Roman Pot stations are located at distances between 60 m and 90 m from the interaction point. Comment: 20 pages, 10 figures, submitted to Nucl. Instr. and Meth.

    The high-rate data challenge: computing for the CBM experiment

    The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rate, exceeding that of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered readout; instead, experiment data will stream freely from self-triggered front-end electronics. In order to reduce the huge raw data volume to a recordable rate, data will be selected exclusively on CPU, which necessitates partial event reconstruction in real time. Consequently, the traditional segregation of online and offline software vanishes; an integrated on- and offline data processing concept is called for. In this paper, we report on concepts and developments in computing for CBM as well as on the status of preparations for its first physics run
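The free-streaming readout concept can be illustrated with a toy event builder that groups time-stamped, self-triggered messages into event candidates by looking for gaps in time (a deliberately simplified sketch; CBM's actual timeslice-based processing is far more involved):

```python
def build_events(timestamps, max_gap):
    """Group time-ordered hit timestamps into event candidates.

    A new event starts wherever the gap between consecutive hits
    exceeds max_gap, since without a hardware trigger the event
    boundaries must be found in software.
    """
    events, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > max_gap:
            events.append(current)
            current = [t]
        else:
            current.append(t)
    events.append(current)
    return events
```

Only event candidates that pass a subsequent (software) selection would be written to storage, which is what reduces the raw data volume to a recordable rate.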

    A precision device needs precise simulation: Software description of the CBM Silicon Tracking System

    Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS makes use of double-sided silicon micro-strip sensors with double metal layers. We present a description of transport and detector response simulation, including all relevant physical effects like charge creation and drift, charge collection, cross-talk and digitization. Of particular importance and novelty is the description of the time behavior of the detector, since its readout will not be externally triggered but continuous. We also cover some aspects of local reconstruction, which in the CBM case has to be performed in real time and thus requires high-speed algorithms
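A toy model of the strip-sensor response chain mentioned above (charge deposition, capacitive cross-talk to neighbouring strips, then threshold and ADC conversion); all parameter values and the coupling model are hypothetical, not those of the STS simulation:

```python
import numpy as np

def digitize_hit(x_um, charge, n_strips=8, pitch_um=58.0,
                 cross_talk=0.05, threshold=1500.0, adc_gain=0.01):
    """Toy strip-detector digitization for a single point charge."""
    signal = np.zeros(n_strips)
    strip = int(round(x_um / pitch_um))
    signal[strip] = charge
    # Capacitive cross-talk: a fraction of each strip's charge couples
    # to its two neighbours (charge-conserving toy model).
    coupled = (1.0 - 2.0 * cross_talk) * signal
    coupled[1:] += cross_talk * signal[:-1]
    coupled[:-1] += cross_talk * signal[1:]
    # Digitization: only strips above threshold produce an ADC value.
    return np.where(coupled > threshold,
                    (coupled * adc_gain).astype(int), 0)
```

In a continuous, self-triggered readout each such digitized strip signal would additionally carry a timestamp, which is the time behaviour the report emphasizes.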

    Performance for proton anisotropic flow measurement of the CBM experiment at FAIR

    The Compressed Baryonic Matter experiment (CBM) performance for proton anisotropic flow measurements is studied with Monte-Carlo simulations using collisions of gold ions at a lab momentum of 12A GeV/c, employing the DCM-QGSM-SMM heavy-ion event generator. Realistic procedures are used for centrality estimation with the number of registered tracks and for particle identification with information from the Time-Of-Flight detector. The variation of directed flow estimates obtained with various combinations of PSD modules is used to evaluate possible systematic biases due to the collision symmetry plane estimation
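For reference, the directed flow observable underlying such a study is the first Fourier coefficient of the azimuthal particle distribution relative to the collision symmetry plane, v1 = <cos(phi - Psi)>. A minimal sketch (assuming the symmetry-plane angle Psi is known; in the experiment it is estimated, e.g. from PSD modules, and the estimate must be corrected for the plane's resolution):

```python
import numpy as np

def directed_flow(phi, psi):
    """v1 = <cos(phi - psi)> over the particles of an event sample.

    phi: array of particle azimuthal angles (rad).
    psi: collision symmetry-plane angle (rad), assumed known here.
    """
    return float(np.mean(np.cos(phi - psi)))
```

Comparing v1 obtained with different symmetry-plane estimators (different PSD module combinations) probes exactly the systematic bias the abstract describes.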

    Using multiplicity of produced particles for centrality determination in heavy-ion collisions with the CBM experiment

    The evolution of matter created in a heavy-ion collision depends on its initial geometry. Experimentally, the collision geometry is characterized by centrality. A procedure for centrality determination in the Compressed Baryonic Matter (CBM) experiment at FAIR is presented. The relation between parameters of the collision geometry (such as the impact parameter magnitude) and centrality classes is extracted using the multiplicity of produced charged particles. The latter is connected to the collision geometry parameters using a Monte-Carlo Glauber approach
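The multiplicity-to-centrality mapping can be sketched as a percentile of the measured multiplicity distribution (illustrative only; the connection to impact parameter additionally requires fitting the distribution with a Monte-Carlo Glauber model, which is not shown here):

```python
import numpy as np

def centrality_percentile(mult, all_mults):
    """Centrality (%) of an event from its charged-particle multiplicity.

    Defined as the fraction of events with larger multiplicity, so 0%
    corresponds to the most central (highest-multiplicity) collisions.
    """
    sorted_mults = np.sort(np.asarray(all_mults))
    rank = np.searchsorted(sorted_mults, mult, side="right")
    return 100.0 * (len(sorted_mults) - rank) / len(sorted_mults)
```

Centrality classes (e.g. 0-10%, 10-20%, ...) are then simply bins of this percentile, and the Glauber fit assigns each bin a mean impact parameter.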